The basic indicators of a researcher's productivity and impact are still the number of publications and their citation counts. These metrics are clear, straightforward, and easy to obtain. When a ranking of scholars is needed, for instance in grant, award, or promotion procedures, their use is the fastest and cheapest way of prioritizing some scientists over others. However, due to their nature, there is a danger of oversimplifying scientific achievements. Therefore, many other indicators have been proposed, including the PageRank algorithm, known for ranking webpages, and its modifications suited to citation networks. Nevertheless, this recursive method is computationally expensive, and even if it has the advantage of favouring prestige over popularity, its application should be well justified, particularly when compared to standard citation counts. In this study, we analyze three large datasets of computer science papers in the categories of artificial intelligence, software engineering, and theory and methods, and apply 12 different ranking methods to the citation networks of authors. We compare the resulting rankings with self-compiled lists of outstanding researchers selected as frequent editorial board members of prestigious journals in the field and conclude that there is no evidence of PageRank-based methods outperforming simple citation counts.
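To illustrate the recursive nature of the method discussed above, the following is a minimal sketch of power-iteration PageRank on a toy author citation graph. The graph, the damping factor of 0.85, and the iteration count are illustrative assumptions, not data or parameters from the study itself.

```python
# Toy author citation graph: an edge u -> v means "author u cites author v".
# The authors and edges are hypothetical, chosen only for illustration.
edges = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
nodes = sorted(edges)
d = 0.85  # damping factor, a common default assumption

# Start from a uniform score and iterate until (approximate) convergence.
pr = {n: 1.0 / len(nodes) for n in nodes}
for _ in range(100):
    pr = {
        n: (1 - d) / len(nodes)
        + d * sum(pr[u] / len(edges[u]) for u in nodes if n in edges[u])
        for n in nodes
    }

# Unlike a plain citation count, each incoming citation is weighted by the
# score of the citing author, which is why PageRank is said to capture
# prestige rather than mere popularity.
```

In this toy graph the most-cited author (C, with three incoming edges) also tops the PageRank ranking, which mirrors the abstract's observation that the recursive method need not produce rankings that differ meaningfully from simple citation counts.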